There are two different sets of needs that external contest formats strive to satisfy.
- The first is that of contest admins, who, for several reasons (storage of old contests, backup, distribution of data), want to export the contest's original data (tasks, contestants, ...) together with all data generated during the contest (from the contestants: submissions, user tests, ...; and from the system: evaluations, scores, ...). Once a contest has been exported in this format, CMS must be able to reimport it in such a way that the new instance is indistinguishable from the original.
- The second is that of contest creators, who want an environment that helps them design tasks and testcases, and insert the contest data (contestant names and so on). The format needs to be easy to write, understand and modify, and should provide tools to help develop and test the tasks (automatic generation of testcases, testing of solutions, ...). CMS must be able to import it as a new contest, but also to import it over an already created contest (after some data has been updated).
CMS provides an exporter, ``cmsContestExporter``, and an importer, ``cmsContestImporter``, working with a format suitable for the first set of needs. This format comprises a dump of all serializable data regarding the contest in a JSON file, together with the files needed by the contest (testcases, statements, submissions, user tests, ...). The exporter and importer also understand compressed versions of this format (i.e., in a zip or tar file). For more information run

.. sourcecode:: bash

    cmsContestExporter -h
    cmsContestImporter -h
As for the second set of needs, the philosophy is that CMS should not force a particular environment upon contest creators for writing contests and tasks. Therefore, we encourage you to write importer and reimporter scripts, modeled upon those we wrote for the environment used in the Italian Olympiads, which can be run with the commands ``cmsYamlImporter`` and ``cmsYamlReimporter`` and inspected at :gh_blob:`cmscontrib/YamlImporter.py` and :gh_blob:`cmscontrib/YamlReimporter.py`. If you want to use the Italian environment, a description follows in the next section, but please be aware that it has severe limitations: for example, many handles are in Italian and the support for complex task types is a bit cumbersome.
You can follow this description while looking at this example. A contest is represented by one directory, containing:
- a YAML file named ``contest.yaml``, that describes the general contest properties;
- for each task ``{task_name}``, a YAML file ``{task_name}.yaml`` that describes the task and a directory ``{task_name}`` that contains all the files needed to build the statement of the problem, the input and output cases, the reference solution and (when used) the solution checker.
The exact structure of these files and directories is detailed below. Note that providing confusing input to ``cmsYamlImporter`` can, unsurprisingly, confuse it and create inconsistent tasks and/or strange errors. By confusing input we mean parameters and/or files from which it can infer either no task type or score type, or several conflicting ones.
The ``contest.yaml`` file is a plain YAML file, with at least the following keys.
- ``nome_breve`` ("short name", string): the contest's short name, used for internal reference (and exposed in the URLs); it has to match the name of the directory that serves as contest root.
- ``nome`` ("name", string): the contest's name (description), shown to contestants in the web interface.
- ``problemi`` ("tasks", list of strings): a list of the tasks belonging to this contest; for each of these strings, say ``{task_name}``, there must be a file named ``{task_name}.yaml`` in the contest directory and a directory called ``{task_name}``, used to extract information about that task; the order in this list will be the order of the tasks in the web interface.
- ``utenti`` ("users", list of associative arrays): each element of the list describes one user of the contest; the exact structure of the record is described :ref:`below <externalcontestformats_user-description>`.
The following are optional keys.
- ``inizio`` ("start", integer): the UNIX timestamp of the beginning of the contest (copied in the ``start`` field); defaults to zero, meaning that contest times haven't yet been decided.
- ``fine`` ("stop", integer): the UNIX timestamp of the end of the contest (copied in the ``stop`` field); defaults to zero, meaning that contest times haven't yet been decided.
- ``token_*``: token parameters for the contest, see :ref:`configuringacontest_tokens` (the names of the parameters are the same as the internal names described there); by default tokens are disabled.
- ``max_*_number`` and ``min_*_interval`` (integers): limitations for the whole contest, see :ref:`configuringacontest_limitations` (the names of the parameters are the same as the internal names described there); by default they're all unset.
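Putting these keys together, a minimal ``contest.yaml`` could look like the following sketch (all names, timestamps and values are invented for illustration):

.. sourcecode:: yaml

    nome_breve: samplecontest   # must match the contest root directory
    nome: "A Sample Contest"
    problemi:                   # order here is the order in the web interface
      - taskA
      - taskB
    inizio: 1400000000          # UNIX timestamp of the contest start
    fine: 1400018000            # UNIX timestamp of the contest end
    utenti:
      - username: user1
        password: secret1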
Each contest user (contestant) is described in one element of the ``utenti`` key in the ``contest.yaml`` file. Each record has to contain the following keys.

- ``username`` (string): obviously, the username.
- ``password`` (string): as obviously as before, the user's password.
The following are optional keys.
- ``nome`` ("name", string): the user's real first name; defaults to the empty string.
- ``cognome`` ("surname", string): the user's real last name; defaults to the value of ``username``.
- ``ip`` (string): the IP address from which incoming connections for this user are accepted, see :ref:`configuringacontest_login`; defaults to ``0.0.0.0``.
- ``fake`` (string): when set to ``True`` (a case-sensitive *string*), it sets the ``hidden`` flag on the user, see :ref:`configuringacontest_login`; defaults to ``False``.
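For example, a complete entry of the ``utenti`` list could look like this sketch (all values are invented):

.. sourcecode:: yaml

    - username: user1
      password: secret1
      nome: Carla
      cognome: Rossi
      ip: 10.0.0.1      # accept this user's connections only from this address
      fake: "True"      # sets the hidden flag; note the case-sensitive string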
The task YAML file requires the following keys.

- ``nome_breve`` ("short name", string): the name used to refer to this task internally; it is exposed in the URLs.
- ``nome`` ("name", string): the long name (title) used in the web interface.
- ``n_input`` (integer): the number of test cases to be evaluated for this task; the actual test cases are retrieved from the :ref:`task directory <externalcontestformats_task-directory>`.
The following are optional keys.
- ``timeout`` (float): the time limit for this task, in seconds; defaults to no limit.
- ``memlimit`` (integer): the memory limit for this task, in megabytes; defaults to no limit.
- ``risultati`` ("results", string): a comma-separated list of test cases (identified by their numbers, starting from 0) that are marked as public, hence their results are available to contestants even without using tokens.
- ``token_*``: token parameters for the task, see :ref:`configuringacontest_tokens` (the names of the parameters are the same as the internal names described there); by default tokens are disabled.
- ``max_*_number`` and ``min_*_interval`` (integers): limitations for the task, see :ref:`configuringacontest_limitations` (the names of the parameters are the same as the internal names described there); by default they're all unset.
- ``outputonly`` (boolean): if set to True, the task is created with the :ref:`tasktypes_outputonly` type; defaults to False.
The following are optional keys that must be present for some task types or score types.

- ``total_value`` (float): for tasks using the :ref:`scoretypes_sum` score type, this is the maximum score for the task and defaults to 100.0; for other score types, the maximum score is computed from the :ref:`task directory <externalcontestformats_task-directory>`.
- ``infile`` and ``outfile`` (strings): for :ref:`tasktypes_batch` tasks, these are the file names of the input and output files; they default to ``input.txt`` and ``output.txt``.
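As a sketch, a task file ``taskA.yaml`` using the keys above could read as follows (all values are invented; ``infile`` and ``outfile`` are meaningful only for Batch tasks):

.. sourcecode:: yaml

    nome_breve: taskA
    nome: "A Sample Task"
    n_input: 20          # expects input/input0.txt through input/input19.txt
    timeout: 1.0         # seconds
    memlimit: 256        # megabytes
    risultati: "0,1,2"   # test cases whose results are public
    infile: input.txt
    outfile: output.txt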
The content of the task directory is used both to retrieve the task data and to infer the type of the task.
These are the required files.
- ``testo/testo.pdf`` ("statement"): the main statement of the problem. It is not yet possible to import several statements associated with different languages.
- ``input/input{%d}.txt`` and ``output/output{%d}.txt`` for all integers ``{%d}`` between 0 (included) and ``n_input`` (excluded): these are of course the input files and (one of) the correct output files.
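For instance, the directory of a hypothetical task ``taskA`` with ``n_input: 2`` would contain at least:

.. sourcecode:: text

    taskA/
      testo/testo.pdf      # the statement
      input/input0.txt
      input/input1.txt
      output/output0.txt
      output/output1.txt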
The following are optional files that must be present for certain task types or score types.
- ``gen/GEN``: in the Italian environment, this file describes the parameters for the input generator: each line not composed entirely of white space or comments (comments start with ``#`` and extend to the end of the line) represents an input file. Here it is used, in case it contains specially formatted comments, to signal that the score type is :ref:`scoretypes_groupmin`. If a line contains only a comment of the form ``# ST: {score}``, it marks the beginning of a new group assigning at most ``{score}`` points, containing all subsequent testcases until the next special comment. If the file does not exist, or does not contain any special comments, the task is given the :ref:`scoretypes_sum` score type.
- ``sol/grader.{%l}`` (where ``{%l}`` here and in the following means a supported language extension): for tasks of type :ref:`tasktypes_batch`, this is the piece of code that gets compiled together with the submitted solution, and usually takes care of reading the input and writing the output. If one grader is present, the graders for all supported languages must be provided.
- ``sol/*.h`` and ``sol/*lib.pas``: if a grader is present, all other files in the ``sol`` directory that end with ``.h`` or ``lib.pas`` are treated as auxiliary files needed by the compilation of the grader with the submitted solution.
- ``cor/correttore`` ("checker"): for tasks of type :ref:`tasktypes_batch` or :ref:`tasktypes_outputonly`, if this file is present, it must be the executable that examines the input and both the correct and the contestant's output files, and assigns the outcome. If the file is not present, a simple diff is used to compare the correct and the contestant's output files.
- ``cor/manager``: for tasks of type :ref:`tasktypes_communication`, this executable is the program that reads the input and communicates with the user's solution.
- ``sol/stub.{%l}``: for tasks of type :ref:`tasktypes_communication`, this is the piece of code that is compiled together with the user's submitted code, and is usually used to manage the communication with ``manager``. Again, stubs for all supported languages must be present.
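To illustrate the special comments in ``gen/GEN``, a hypothetical file defining two groups, worth at most 30 and 70 points respectively, could read as follows (the generator parameters on each line are invented for illustration):

.. sourcecode:: text

    # ST: 30
    10 100
    20 100
    # ST: 70
    1000 5000
    2000 5000

With this file the task gets the :ref:`scoretypes_groupmin` score type, with the first two testcases in a 30-point group and the last two in a 70-point group; without the ``# ST:`` comments it would get the :ref:`scoretypes_sum` score type instead.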